Diffusion models have shown impressive performance for generative modelling of images. In this paper, we present a novel semantic segmentation method based on diffusion models. By modifying the training and sampling scheme, we show that diffusion models can perform lesion segmentation of medical images. To generate image-specific segmentations, we train the model on ground-truth segmentations and use the image as a prior during the sampling process. Given the stochastic sampling process, we can generate a distribution of segmentation masks. This property allows us to compute pixel-wise uncertainty maps of the segmentation and enables an implicit ensemble of segmentations that increases segmentation performance. We evaluate our method on the BraTS2020 dataset for brain tumor segmentation. Compared to state-of-the-art segmentation models, our approach yields good segmentation results and, in addition, meaningful uncertainty maps.
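A minimal sketch, assuming the masks have already been sampled from the trained diffusion model, of how a distribution of sampled masks can be turned into an implicit-ensemble segmentation and a pixel-wise uncertainty map. The 0.5 threshold and the use of the per-pixel standard deviation as the uncertainty measure are illustrative choices, not necessarily the paper's exact procedure.

```python
import numpy as np

def ensemble_from_samples(sampled_masks, threshold=0.5):
    """sampled_masks: (N, H, W) array of masks from N independent runs of the
    stochastic sampling process, values in [0, 1]."""
    mean_mask = sampled_masks.mean(axis=0)                     # per-pixel foreground frequency
    segmentation = (mean_mask >= threshold).astype(np.uint8)   # implicit-ensemble mask
    uncertainty = sampled_masks.std(axis=0)                    # pixel-wise uncertainty map
    return segmentation, uncertainty

# usage with dummy samples standing in for diffusion-model outputs
samples = (np.random.rand(10, 64, 64) > 0.5).astype(float)
seg, unc = ensemble_from_samples(samples)
```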
The limited availability of large image datasets is a major issue for developing accurate and generalizable machine learning methods in medicine. The limited amount of data is mostly due to the use of different acquisition protocols, different hardware, and data privacy. At the same time, training a classification model on a small dataset leads to poor model quality. To overcome this problem, a combination of various image datasets of different provenance is often used, e.g., in multi-site studies. However, if an additional dataset does not include all classes of the task, the learning of the classification model can be biased towards the device or acquisition site. This is particularly the case for magnetic resonance (MR) images, where different MR scanners introduce a bias that limits model performance. In this paper, we present a novel method that learns to ignore the scanner-related features present in the images, while learning the features relevant for the classification task. We focus on the real-world scenario where only a single small dataset provides images of all classes. We exploit this situation by introducing specific additional constraints on the latent space, which steer the focus towards disease-related rather than scanner-specific features. Our method, Learn to Ignore, outperforms state-of-the-art domain adaptation methods on a multi-site MRI dataset, on a classification task between multiple sclerosis patients and healthy subjects.
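A hedged sketch of one plausible way such a latent-space constraint could look: alongside the usual classification loss, penalize differences between per-scanner mean latent vectors so the encoder cannot rely on scanner-specific features. The penalty form, the per-scanner means, and the weight `lam` are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def invariance_penalty(z, scanner_id):
    # z: (B, d) latent vectors; scanner_id: (B,) integer scanner labels.
    # Penalize the spread of per-scanner mean latent vectors.
    means = torch.stack([z[scanner_id == s].mean(dim=0) for s in scanner_id.unique()])
    return ((means - means.mean(dim=0)) ** 2).sum(dim=1).mean()

def total_loss(logits, labels, z, scanner_id, lam=1.0):
    # task classification loss plus the scanner-invariance penalty
    return F.cross_entropy(logits, labels) + lam * invariance_penalty(z, scanner_id)

# usage with dummy tensors standing in for encoder/classifier outputs
z = torch.randn(8, 32)
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
scanner_id = torch.randint(0, 3, (8,))
print(total_loss(logits, labels, z, scanner_id))
```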
Purpose: Localization and segmentation of individual bones is an important preprocessing step in many planning and navigation applications. It is, however, a time-consuming and repetitive task if done manually. This holds true not only for clinical practice but also for the acquisition of training data. We therefore not only propose an end-to-end learnt algorithm that is capable of segmenting 125 distinct bones in upper-body CT, but also provide an ensemble-based uncertainty measure that helps to single out scans for enlarging the training dataset. Methods: We create a fully automated, end-to-end learnt segmentation using a neural network architecture inspired by the 3D-UNet and fully supervised training. Results are improved using ensembles and inference-time augmentation. We investigate the usefulness of the ensemble uncertainty for the prospective use of unlabelled scans as part of the training dataset. Results: Our method is evaluated on an in-house dataset of 16 upper-body CT scans with a resolution of 2 mm in each dimension. Taking into account all 125 bones in our label set, our most successful ensemble achieves a median Dice score coefficient of 0.83. We find a lack of correlation between the ensemble uncertainty of a scan and its prospective influence on the accuracy obtained after enlarging the training set with it. At the same time, we show that the ensemble uncertainty correlates with the number of voxels that need manual correction after the initial automated segmentation, thus minimizing the time required to finalize a new ground-truth segmentation. Conclusion: In combination, scans with low ensemble uncertainty need less annotation time while yielding similar future DSC improvements. They are therefore ideal candidates for enlarging a training set for the segmentation of distinct bones in upper-body CT scans.
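A small illustrative sketch, not the paper's implementation, of the two quantities discussed above: the Dice similarity coefficient against a ground-truth mask, and a scan-level ensemble uncertainty computed here as the mean per-voxel variance of the ensemble members' foreground probabilities.

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def scan_uncertainty(prob_maps):
    """prob_maps: (M, X, Y, Z) foreground probabilities from M ensemble members.
    Returns the mean per-voxel variance across the ensemble as a scalar."""
    return prob_maps.var(axis=0).mean()

# dummy example: 5 ensemble members on a tiny volume
probs = np.random.rand(5, 16, 16, 16)
pred = probs.mean(axis=0) > 0.5
gt = np.random.rand(16, 16, 16) > 0.5
print(dice(pred, gt), scan_uncertainty(probs))
```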
Artificial intelligence is emerging as a useful aid for diagnostic and treatment decisions in numerous clinical applications. Owing to the rapid growth in available data and computing power, deep neural networks perform on par with, or better than, clinicians in many tasks. To comply with the principles of trustworthy AI, it is essential that AI systems be transparent, robust, fair, and ensure accountability. Because of the lack of insight into the specifics of their decision-making processes, current deep neural systems are referred to as black boxes. Consequently, the interpretability of deep neural networks needs to be ensured before they can be incorporated into routine clinical workflows. In this narrative review, we used a systematic keyword search and domain expertise to identify nine different types of interpretability methods that have been applied to deep learning models for medical image analysis, grouped by the type of explanation generated and their technical similarity. Furthermore, we report the progress made in evaluating the explanations produced by the various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions for the interpretability of deep neural networks in medical imaging analysis.
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/.
The purpose of this work was to tackle practical issues which arise when using a tendon-driven robotic manipulator with a long, passive, flexible proximal section in medical applications. A separable robot which overcomes difficulties in actuation and sterilization is introduced, in which the body containing the electronics is reusable and the remainder is disposable. A control input which resolves the redundancy in the kinematics and a physical interpretation of this redundancy are provided. The effect of a static change in the proximal section angle on bending angle error was explored under four testing conditions for a sinusoidal input. Bending angle error increased with increasing proximal section angle for all testing conditions; relative to the baseline case, compensation reduced this error by an average of 41.48% for re-tension, 4.28% for hysteresis, and 52.35% for re-tension + hysteresis. Two major sources of error in tracking the bending angle were identified: time delay from hysteresis and DC offset from the proximal section angle. Examination of these error sources revealed that the simple hysteresis compensation was most effective for removing time delay and re-tension compensation for removing DC offset, which was the primary source of increasing error. The re-tension compensation was also tested for dynamic changes in the proximal section and reduced error in the final configuration of the tip by 89.14% relative to the baseline case.
Compliance in actuation has been exploited to generate highly dynamic maneuvers such as throwing that take advantage of the potential energy stored in joint springs. However, the energy storage and release could not be well-timed yet. On the contrary, for multi-link systems, the natural system dynamics might even work against the actual goal. With the introduction of variable stiffness actuators, this problem has been partially addressed. With a suitable optimal control strategy, the approximate decoupling of the motor from the link can be achieved to maximize the energy transfer into the distal link prior to launch. However, such continuous stiffness variation is complex and typically leads to oscillatory swing-up motions instead of clear launch sequences. To circumvent this issue, we investigate decoupling for speed maximization with a dedicated novel actuator concept denoted Bi-Stiffness Actuation. With this, it is possible to fully decouple the link from the joint mechanism by a switch-and-hold clutch and simultaneously keep the elastic energy stored. We show that with this novel paradigm, it is not only possible to reach the same optimal performance as with power-equivalent variable stiffness actuation, but even directly control the energy transfer timing. This is a major step forward compared to previous optimal control approaches, which rely on optimizing the full time-series control input.
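To make the energy-transfer idea concrete, here is a back-of-the-envelope sketch (not taken from the paper) of the link speed attainable when all stored spring energy is released into a fully decoupled link, assuming lossless transfer; the stiffness, deflection, and inertia values are arbitrary placeholders.

```python
import math

def launch_speed(k, theta, inertia):
    """Idealized link angular velocity after a switch-and-hold clutch releases
    the elastic energy E = 0.5 * k * theta**2 into a link with moment of
    inertia I, assuming lossless transfer and a fully decoupled motor:
    0.5 * I * omega**2 = E  =>  omega = sqrt(2 * E / I)."""
    energy = 0.5 * k * theta ** 2
    return math.sqrt(2.0 * energy / inertia)

# example values: k in N·m/rad, theta in rad, inertia in kg·m^2
print(launch_speed(k=50.0, theta=0.8, inertia=0.05))  # rad/s
```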
Previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors such as YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
The task of reconstructing 3D human motion has wide-ranging applications. The gold-standard motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware, and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap as they take single-view videos as inputs. Replacing multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, thus making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not in existing MoCap datasets used for training. This reduces their appeal as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field. It is optimized to represent the underlying 3D motions across a set of videos of the same action, as illustrated by the sketch below. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo using 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
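As a rough illustration of fitting one shared motion representation to several videos of the same action, the following hedged PyTorch sketch optimizes a small MLP mapping a normalized action phase and a per-video latent code to 3D joints, supervised only by (here synthetic) 2D keypoints through a weak-perspective projection. The architecture, latent codes, camera model, and training loop are illustrative assumptions, not NeMo's actual design.

```python
import torch
import torch.nn as nn

class MotionField(nn.Module):
    """MLP mapping a normalized phase t and a per-video latent code to 3D joints."""
    def __init__(self, code_dim=8, n_joints=17):
        super().__init__()
        self.n_joints = n_joints
        self.mlp = nn.Sequential(
            nn.Linear(1 + code_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_joints * 3),
        )

    def forward(self, t, code):
        # t: (T, 1) phases in [0, 1]; code: (T, code_dim)
        return self.mlp(torch.cat([t, code], dim=-1)).view(-1, self.n_joints, 3)

def project(joints3d, cam):
    """Weak-perspective projection: scale the x/y coordinates and translate."""
    s, txy = cam[:, :1].unsqueeze(-1), cam[:, 1:].unsqueeze(1)
    return s * joints3d[..., :2] + txy

torch.manual_seed(0)
n_videos, T, J = 4, 30, 17
field = MotionField(n_joints=J)
codes = nn.Parameter(0.01 * torch.randn(n_videos, 8))               # per-video latents
cams = nn.Parameter(torch.tensor([[1.0, 0.0, 0.0]]).repeat(n_videos, 1))
opt = torch.optim.Adam(list(field.parameters()) + [codes, cams], lr=1e-3)

# synthetic stand-in for per-frame 2D keypoints from an off-the-shelf detector
kp2d = torch.randn(n_videos, T, J, 2)
t = torch.linspace(0, 1, T).unsqueeze(-1)                            # shared phase grid

for step in range(200):
    loss = torch.zeros(())
    for v in range(n_videos):
        joints3d = field(t, codes[v].expand(T, -1))                  # (T, J, 3)
        loss = loss + ((project(joints3d, cams[v].expand(T, -1)) - kp2d[v]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```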
Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets.
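To make the order-statistics idea concrete, the following is a minimal sketch of a standard distribution-free construction: on a held-out set of i.i.d. losses, the k-th smallest loss is a (1 - delta) upper confidence bound on the beta-quantile whenever P(Binomial(n, beta) >= k) <= delta. The function name and the example data are illustrative, and this is a generic construction rather than necessarily the exact family of bounds proposed in the work.

```python
import numpy as np
from scipy.stats import binom

def quantile_upper_bound(losses, beta=0.9, delta=0.05):
    """Distribution-free (1 - delta) upper confidence bound on the beta-quantile
    of the loss distribution, built from order statistics of held-out losses."""
    losses = np.sort(np.asarray(losses))
    n = len(losses)
    # smallest k (1-indexed) with P(Binomial(n, beta) >= k) <= delta
    for k in range(1, n + 1):
        if binom.sf(k - 1, n, beta) <= delta:
            return losses[k - 1]
    return np.inf  # not enough samples to certify the bound

# usage: bound the 90th-percentile loss of a predictor on a calibration set
rng = np.random.default_rng(0)
print(quantile_upper_bound(rng.exponential(size=500), beta=0.9, delta=0.05))
```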